
test: handle warnings generated by our own tests #1469

Conversation

tonyandrewmeyer (Contributor)

Our tests currently end with a very large list of warnings. This PR fixes that:

  • All of the Harness tests emit a PendingDeprecationWarning telling us that Harness is going away and we should use Scenario. This seems like something to simply ignore in our own tests, so I've added a -W argument to Python to silence them in bulk.
  • One test_main test raises a warning about controller storage going away. There's another test that verifies that the warning is raised, and the two tests are otherwise identical, so I've merged them - we don't need both.
  • Two of the secret tests check that SecretInfo objects can still be created without providing the model UUID. I've wrapped these in pytest.warns to (a) ensure that the warning is raised, and (b) stop it appearing in the tox output (see the sketch after this list).
  • Scenario should provide the model UUID when creating SecretInfo objects. This is a one-line fix that stops the deprecation warnings. It's really a change to the code rather than to the tests, but since it's so small I think it's ok to include it here.
  • The context.on Scenario tests check that pre- and post-series-upgrade events work, and those raise warnings. I've split them off into a separate test that verifies (and silences) the warning.
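For the bulk Harness case, the filter lives outside the tests themselves, e.g. a Python option along the lines of `-W ignore::PendingDeprecationWarning` passed via the tox command (the exact filter spec in the PR may differ). The pytest.warns approach for the secret tests looks roughly like the sketch below - the constructor here is a stand-in for illustration, not ops' actual SecretInfo API:

```python
import warnings

import pytest


def make_secret_info(model_uuid=None):
    # Stand-in for the real constructor, purely illustrative.
    if model_uuid is None:
        warnings.warn(
            "model_uuid should be provided", DeprecationWarning, stacklevel=2
        )


def test_secret_info_without_model_uuid():
    # pytest.warns both asserts that the warning is raised and
    # keeps it out of the test run's warning summary.
    with pytest.warns(DeprecationWarning):
        make_secret_info()
```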

Fixes #1408

Review thread on testing/tests/test_context_on.py (outdated, resolved).
james-garner-canonical dismissed their stale review on November 27, 2024:

Some tests are failing

james-garner-canonical (Contributor)

Validate PR title: I really think that tests is a better commit type than test, because to me the latter always feels like "this is just a test, not for real". But here we are ...

Unit tests: looks like catch_warnings fails on older Pythons, I guess the __init__ form changed? This makes me think we should update CI to not have unit tests cancel each other -- it's possible for them to pass on some Python versions and fail on others, and it's good to know which ones -- and they're cheap and fast to run, so cancelling doesn't really save much anyway.

tonyandrewmeyer (Contributor, Author) commented on Nov 27, 2024:

> Validate PR title: I really think that tests is a better commit type than test, because to me the latter always feels like "this is just a test, not for real". But here we are ...

I feel like I get this wrong about 50% of the time 😆.

> Unit tests: looks like catch_warnings fails on older Pythons, I guess the __init__ form changed?

Yeah, a bunch of parameters were added in 3.11, and I'd forgotten that. Maybe I ought to default to running tox with Python 3.8 locally - it's much more likely to catch things like this than a newer Python is.
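For context, a minimal sketch of the incompatibility, assuming the test used the new keyword-argument form:

```python
import warnings

# Python 3.11+ only: catch_warnings() grew keyword arguments.
with warnings.catch_warnings(action="ignore", category=PendingDeprecationWarning):
    pass

# Equivalent spelling that also works on older Pythons (3.8+):
with warnings.catch_warnings():
    warnings.simplefilter("ignore", PendingDeprecationWarning)
```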

> This makes me think we should update CI to not have unit tests cancel each other -- it's possible for them to pass on some Python versions and fail on others, and it's good to know which ones -- and they're cheap and fast to run, so cancelling doesn't really save much anyway.

Interesting thought - feel free to add it to the team discussion topics. I do agree that at times (like this one) it's more informative to immediately see "ah, this is bad on 3.8, or on macOS, or ...". At other times, when something is just generally wrong (hopefully only in drafts, if one is properly running tox locally), it does feel wasteful to run the whole matrix when every job is going to fail.
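(If this CI runs on GitHub Actions, the behaviour being discussed is the matrix's default fail-fast cancellation; setting `fail-fast: false` under the job's `strategy` would let all Python versions run to completion.)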

tonyandrewmeyer changed the title from "tests: handle warnings generated by our own tests" to "test: handle warnings generated by our own tests" on Nov 27, 2024.
dimaqq (Contributor) left a comment:


I wish there was a more elegant way, but I guess given that Harness is deprecated, this will do.

The downside is that we won't notice if Harness is accidentally used in new tests, isn't it? I suppose that would get caught by human review.

tonyandrewmeyer merged commit df856d2 into canonical:main on Dec 16, 2024.
29 of 30 checks passed.
tonyandrewmeyer deleted the no-harness-to-scenario-warnings-1408 branch on December 16, 2024.
Linked issue: #1408 (Ignore Harness PendingDeprecationWarnings in our own tests)